Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input

Ernoult, Maxence, Grollier, Julie, Querlioz, Damien, Bengio, Yoshua, Scellier, Benjamin

Neural Information Processing Systems

Equilibrium Propagation (EP) is a biologically inspired learning algorithm for convergent recurrent neural networks, i.e. RNNs that are fed a static input x and settle to a steady state. Training a convergent RNN consists of adjusting the weights until the steady state of the output neurons coincides with a target y. Convergent RNNs can also be trained with the more conventional Backpropagation Through Time (BPTT) algorithm. In its original formulation, EP was described in terms of real-time neuronal dynamics, which is computationally costly.
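As a concrete illustration of what "a convergent RNN fed by a static input" means, here is a minimal sketch: a small recurrent network whose input x is held fixed while the state is iterated until it settles. The sizes, weight scales, and tanh nonlinearity are assumptions for the example, not the paper's model.

```python
# Minimal sketch of a convergent RNN: static input x, state iterated to a
# steady state. All names and sizes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_in = 8, 4
# Small weights keep the update map contractive, so a steady state exists.
W = 0.1 * rng.standard_normal((n_hidden, n_hidden))
U = 0.1 * rng.standard_normal((n_hidden, n_in))
x = rng.standard_normal(n_in)      # static input: the same x at every time step

def step(s):
    return np.tanh(W @ s + U @ x)  # one recurrent update

s = np.zeros(n_hidden)
for _ in range(200):               # relax toward the fixed point
    s_next = step(s)
    if np.linalg.norm(s_next - s) < 1e-10:
        break
    s = s_next

print(np.linalg.norm(step(s) - s))  # near zero: s is (numerically) a steady state
```

Training would then adjust W and U so that the steady state of the output neurons matches the target y.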


Compositional Symmetry as Compression: Lie Pseudogroup Structure in Algorithmic Agents

Ruffini, Giulio

arXiv.org Artificial Intelligence

In the algorithmic (Kolmogorov) view, agents are programs that track and compress sensory streams using generative programs. We propose a framework where the relevant structural prior is simplicity (Solomonoff), understood as compositional symmetry: natural streams are well described by (local) actions of finite-parameter Lie pseudogroups on geometrically and topologically complex low-dimensional configuration manifolds (latent spaces). Modeling the agent as a generic neural dynamical system coupled to such streams, we show that accurate world-tracking imposes (i) structural constraints -- equivariance of the agent's constitutive equations and readouts -- and (ii) dynamical constraints: under static inputs, symmetry induces conserved quantities (Noether-style labels) in the agent dynamics and confines trajectories to reduced invariant manifolds; under slow drift, these manifolds move but remain low-dimensional. This yields a hierarchy of reduced manifolds aligned with the compositional factorization of the pseudogroup, providing a geometric account of the "blessing of compositionality" in deep models. We connect these ideas to the Spencer formalism for Lie pseudogroups and formulate a symmetry-based, self-contained version of predictive coding in which higher layers receive only coarse-grained residual transformations (prediction-error coordinates) along symmetry directions unresolved at lower layers.
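The claim that symmetry induces conserved quantities and invariant manifolds under static inputs can be illustrated with a deliberately simple toy, not taken from the paper: an SO(2)-equivariant planar flow. Because the vector field commutes with rotations and has no radial component, the radius is a conserved (Noether-style) label and every trajectory stays on a one-dimensional invariant manifold, a circle.

```python
# Toy illustration (an assumption for this sketch, not the paper's setup):
# an SO(2)-equivariant dynamical system ds/dt = omega * J s. The radius
# ||s|| is conserved, so trajectories are confined to circles.
import numpy as np

J = np.array([[0.0, -1.0],             # generator of planar rotations
              [1.0,  0.0]])
omega, dt, steps = 1.3, 1e-3, 5000

angle = omega * dt                     # exact rotation per step avoids Euler drift
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])

s = np.array([2.0, 0.0])
r0 = np.linalg.norm(s)
for _ in range(steps):
    s = R @ s

print(abs(np.linalg.norm(s) - r0))     # near zero: the radius is conserved
```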


Reviews: Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input

Neural Information Processing Systems

The authors first introduce a discrete-time version of equilibrium propagation (EP). They show the equivalence of EP with backpropagation through time (BPTT), demonstrating step-by-step equality of the updates under certain conditions, and they also apply EP to a CNN for the first time. All reviewers agree that the results are original, the quality and clarity of the paper are high, and the results are very significant for the NeurIPS community, in particular for researchers interested in biologically plausible replacements of backpropagation.


Reviews: Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input

Neural Information Processing Systems

The manuscript describes a discrete-time reduction of equilibrium prop (EP), which enables the authors to compare the algorithm's gradient estimates and performance directly to BPTT. Moreover, the associated reduction in compute cost also enables them to train, to my knowledge, the first CNN using EP. While EP approximates BP in feedforward networks, it uses the neuronal activity of an RNN in equilibrium to propagate target information or error feedback and thereby perform credit assignment. While this work may be less interesting for DL practitioners, because EP is still more costly than backprop (BP), it is one of the contenders for biologically plausible backprop discussed in the literature. In that regard, the present work contributes meaningfully to this discussion.
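The correspondence between EP's gradient estimates and the true loss gradient can be illustrated on a scalar toy problem: EP's two-phase estimate approaches the exact gradient as the nudging strength beta goes to 0. The energy, cost, and all numbers below are illustrative assumptions for this sketch, not the paper's networks.

```python
# Scalar toy version of Equilibrium Propagation (an illustrative assumption,
# not the paper's model). Energy E(s) = 0.5*s^2 - w*s*x, cost C(s) = 0.5*(s-y)^2.
w, x, y = 0.7, 1.5, 2.0

s_free = w * x                    # free-phase fixed point: dE/ds = s - w*x = 0
true_grad = (s_free - y) * x      # exact d/dw of the loss 0.5*(s_free - y)^2

def dEdw(s):
    return -s * x                 # partial derivative of E with respect to w

for beta in (0.5, 0.1, 0.01):
    # Nudged-phase fixed point of F = E + beta*C: s - w*x + beta*(s - y) = 0.
    s_nudged = (w * x + beta * y) / (1 + beta)
    # EP estimate: contrast dE/dw between the two phases, scaled by 1/beta.
    ep_grad = (dEdw(s_nudged) - dEdw(s_free)) / beta
    print(beta, ep_grad, true_grad)   # ep_grad approaches true_grad as beta -> 0
```

Here the EP estimate equals true_grad / (1 + beta), so the mismatch shrinks linearly with beta, mirroring the small-nudging limit used in the EP literature.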


Hidden Latent State Inference in a Spatio-Temporal Generative Model

Karlbauer, Matthias, Menge, Tobias, Otte, Sebastian, Lensch, Hendrik P. A., Scholten, Thomas, Wulfmeyer, Volker, Butz, Martin V.

arXiv.org Machine Learning

Knowledge of the hidden factors that determine particular system dynamics is crucial both for explaining them and for pursuing goal-directed, interventional actions. Inferring these factors from time series data without supervision remains an open challenge. Here, we focus on spatio-temporal processes, including wave propagation and weather dynamics, and assume that universal causes (e.g. physics) apply throughout space and time. We apply a novel DIstributed, Spatio-Temporal graph Artificial Neural network Architecture, DISTANA, which learns a generative model in such domains. DISTANA requires fewer parameters and yields more accurate predictions than temporal convolutional neural networks and other related approaches on a 2D circular wave prediction task. We show that DISTANA, when combined with a retrospective latent state inference principle called active tuning, can reliably derive hidden local causal factors. In a current weather prediction benchmark, DISTANA infers our planet's land-sea mask solely by observing temperature dynamics and uses the self-inferred information to improve its own prediction of temperature. We are convinced that the retrospective inference of latent states in generative RNN architectures will play an essential role in future research on causal inference and explainable systems.
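The retrospective latent-state inference idea can be sketched with a deliberately simple example: freeze a known dynamics model and adjust only a hidden input by gradient descent until the model's rollout matches the observations. The linear dynamics, the closed-form gradient, and all numbers below are illustrative assumptions, not DISTANA or active tuning as implemented in the paper.

```python
# Minimal sketch of retrospective latent-state inference: the model is
# frozen; only the hidden cause h is tuned to explain the observations.
# Dynamics x_{t+1} = a*x_t + h are an illustrative assumption.
import numpy as np

a, h_true = 0.9, 0.5               # known gain, hidden cause to be recovered
T = 30
xs = np.empty(T + 1)
xs[0] = 0.0
for t in range(T):                 # generate the observed sequence
    xs[t + 1] = a * xs[t] + h_true

def loss_grad(h):
    """Squared prediction error over the sequence and its gradient w.r.t. h."""
    err = a * xs[:-1] + h - xs[1:]
    return np.sum(err ** 2), 2.0 * np.sum(err)

h = 0.0                            # initial guess for the latent state
for _ in range(500):               # retrospective tuning: only h is updated
    _, g = loss_grad(h)
    h -= 0.01 * g

print(h)                           # converges to the true hidden cause, 0.5
```

In DISTANA the same principle is applied through a learned recurrent model, with the latent input inferred per spatial location by backpropagating the prediction error through time.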


Building a super-resolution image web-app

#artificialintelligence

I made this app as my pilot task for Tessellate coding. The task included finding a suitable model, writing the inference algorithm, wrapping it in a REST API, and finally dockerizing the application. For the task, I used Keras with a TensorFlow backend, plus Flask. This blog is about the challenges I faced in the task and how to overcome them when building your own project. For the model, I researched the topic of image super-resolution a bit and settled on the SRCNN model.
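The final dockerizing step described above can be sketched with a minimal Dockerfile. The filenames (app.py, requirements.txt), base image, and port are assumptions for illustration, not taken from the original project.

```dockerfile
# Minimal sketch of a Dockerfile for a Flask + Keras inference app.
# Filenames, base image, and port are illustrative assumptions.
FROM python:3.9-slim

WORKDIR /app

# Install pinned dependencies first so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code (assumed entry point: app.py exposing a Flask app).
COPY . .

EXPOSE 5000
CMD ["python", "app.py"]
```

Building with `docker build -t srcnn-app .` and running with `docker run -p 5000:5000 srcnn-app` would then serve the REST API from the container.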

